Weakly-supervised video object localization (WSVOL) allows locating objects in videos using only global video labels, such as the object class. State-of-the-art methods rely on multiple independent stages, where initial spatio-temporal proposals are generated using visual and motion cues, and prominent objects are then identified and refined. Localization is performed by solving an optimization problem over one or several videos, and video labels are typically used only for video clustering. This requires costly per-model or per-class inference. Moreover, the localized regions are not necessarily discriminant, because unsupervised motion methods such as optical flow are used, or because video labels are discarded from the optimization. In this paper, we leverage the successful class activation mapping (CAM) methods, which were designed for still images. A new Temporal CAM (TCAM) method is introduced to train a discriminative deep learning (DL) model that exploits spatio-temporal information in videos, using an aggregation mechanism over consecutive CAMs called CAM-Temporal Max Pooling (CAM-TMP). In particular, activations of regions of interest (ROIs) are collected from CAMs produced by a pretrained CNN classifier to build pixel-wise pseudo-labels for training the DL model. In addition, a global unsupervised size constraint and local constraints such as a CRF are used to yield more accurate CAMs. Inference over single independent frames allows parallel processing of clips of frames and real-time localization. Extensive experiments on two challenging YouTube-Objects datasets of unconstrained videos indicate that CAM methods (trained on independent frames) can yield decent localization accuracy. Our proposed TCAM method achieves a new state of the art in WSVOL accuracy, and visual results suggest that it can be adapted for downstream tasks such as visual object tracking and detection. The code is publicly available.
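A minimal sketch of what a CAM-Temporal Max Pooling step and the resulting pixel-wise pseudo-labels could look like; the function names, thresholds, and window size are assumptions for illustration, and the full TCAM method additionally applies the size constraint and CRF refinement mentioned above.

```python
import torch

def cam_temporal_max_pooling(cams: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Aggregate per-frame CAMs over a sliding temporal window by element-wise max.

    cams: tensor of shape (T, H, W), one class activation map per frame.
    Returns a tensor of the same shape where each frame's map is the max over
    its +/- (window // 2) neighbours.
    """
    T = cams.shape[0]
    half = window // 2
    pooled = torch.empty_like(cams)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        pooled[t] = cams[lo:hi].max(dim=0).values
    return pooled

def cams_to_pseudo_labels(pooled_cams, fg_thresh=0.7, bg_thresh=0.3):
    """Turn aggregated CAMs into pixel-wise pseudo-labels:
    1 = foreground ROI, 0 = background, 255 = ignored (ambiguous) pixels."""
    labels = torch.full_like(pooled_cams, 255, dtype=torch.long)
    labels[pooled_cams >= fg_thresh] = 1
    labels[pooled_cams <= bg_thresh] = 0
    return labels
```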
Using only global annotations such as the image class, weakly-supervised learning methods allow CNN classifiers to jointly classify an image and produce the regions of interest associated with the predicted class. However, without any guidance at the pixel level, such methods may produce inaccurate regions. This problem is known to be more challenging with histology images than with natural images, since objects are less salient, structures have more variations, and foreground and background regions have stronger similarities. Therefore, methods from the computer vision literature for the visual interpretation of CNNs may not directly apply. In this work, we propose a simple yet efficient method based on a composite loss function that leverages information from fully negative samples. Our new loss function contains two complementary terms: the first exploits the positive evidence collected from the CNN classifier, while the second leverages the fully negative samples from the training dataset. In particular, we equip a pre-trained classifier with a decoder that allows refining the regions of interest. The same classifier is used to collect both positive and negative evidence at the pixel level to train the decoder. This makes it possible to exploit the fully negative samples that occur naturally in the data, without any additional supervision signal and using only the image class as supervision. Compared to several related methods, we show the substantial improvements introduced by our method on the public GlaS benchmark for colon cancer and the patch-based Camelyon16 benchmark for breast cancer, using three different backbones. Our results show the benefits of using both negative and positive evidence, i.e., the evidence obtained from the classifier and the evidence naturally available in the dataset. We provide an ablation study of both terms. Our code is publicly available.
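A rough sketch of a two-term composite loss in the spirit described above, assuming a sigmoid decoder output; the thresholds, weighting, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def composite_evidence_loss(pixel_logits, cam, is_positive,
                            fg_thresh=0.6, bg_thresh=0.2):
    """Two-term loss for training a localization decoder.

    pixel_logits: (B, H, W) decoder outputs (foreground logits).
    cam:          (B, H, W) CAMs from the frozen classifier, in [0, 1].
    is_positive:  (B,) bool tensor, True if the image contains the target class.
    """
    probs = torch.sigmoid(pixel_logits)

    # Term 1: positive evidence. On positive images, pixels with strong (weak)
    # classifier activation are pushed towards foreground (background).
    pos = is_positive.view(-1, 1, 1).float()
    fg_mask = (cam >= fg_thresh).float() * pos
    bg_mask = (cam <= bg_thresh).float() * pos
    loss_pos = -(fg_mask * torch.log(probs + 1e-8)
                 + bg_mask * torch.log(1.0 - probs + 1e-8)).sum()
    loss_pos = loss_pos / (fg_mask.sum() + bg_mask.sum() + 1e-8)

    # Term 2: negative evidence. On fully negative samples every pixel is
    # background, which requires no extra annotation.
    neg = (~is_positive).view(-1, 1, 1).float()
    loss_neg = -(neg * torch.log(1.0 - probs + 1e-8)).sum()
    loss_neg = loss_neg / (neg.sum() * probs.shape[1] * probs.shape[2] + 1e-8)

    return loss_pos + loss_neg
```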
Using deep learning models to diagnose cancer from histology data raises several challenges. Cancer grading and the localization of regions of interest (ROIs) in these images normally rely on both image- and pixel-level labels, the latter requiring a costly annotation process. Deep weakly-supervised object localization (WSOL) methods provide strategies for low-cost training of deep learning models. Using only image-level annotations, these methods can be trained to classify an image and to produce class activation maps (CAMs) for ROI localization. This paper reviews state-of-the-art DL methods for WSOL. We propose a taxonomy in which these methods are divided into bottom-up and top-down approaches according to the information flow within the model. Although progress on the latter has been limited, recent bottom-up methods are currently driving much of the progress in deep WSOL. Early works focused on designing different spatial pooling functions. However, these methods reached limited localization accuracy and revealed a major limitation: the under-activation of CAMs, which leads to high false-negative localization. Subsequent works aimed to alleviate this issue and recover complete objects. Representative methods from our taxonomy are evaluated and compared in terms of classification and localization accuracy on two challenging histology datasets. Overall, the results indicate poor localization performance, particularly for generic methods that were initially designed to process natural images. Methods designed to address the challenges of histology data yielded good results, but all methods suffer from high false-positive/negative localization. Four key challenges are identified for applying deep WSOL methods in histology: under- and over-activation of CAMs, sensitivity to thresholding, and model selection.
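For context, a minimal sketch of the vanilla CAM computation and the thresholding step whose sensitivity the survey highlights; the helper names and default threshold are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """Vanilla CAM: weight the last conv feature maps by the classifier
    weights of the target class.

    features:  (C, h, w) feature maps from the last conv layer.
    fc_weight: (num_classes, C) weights of the linear classifier after GAP.
    """
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam

def cam_to_mask(cam, thresh=0.5):
    """Binarize a CAM into a localization mask; localization accuracy is highly
    sensitive to this threshold, one of the key issues discussed in the survey."""
    return (cam >= thresh).float()
```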
Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits, and in some cases produce models that outperform multilingual approaches.
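A hedged sketch of the kind of joint objective described above, built from Hugging Face components; the layer sizes, loss weights, and the simplification of the parsing term to a second token-level head are illustrative assumptions, not the published MicroBERT configuration.

```python
import torch.nn as nn
from transformers import BertConfig, BertModel

NUM_POS_TAGS = 17  # e.g. the Universal Dependencies UPOS tag set

# A deliberately tiny configuration in the spirit of the paper (sizes are illustrative).
config = BertConfig(hidden_size=128, num_hidden_layers=3, num_attention_heads=4,
                    intermediate_size=512, vocab_size=16000)
encoder = BertModel(config)

mlm_head = nn.Linear(config.hidden_size, config.vocab_size)
pos_head = nn.Linear(config.hidden_size, NUM_POS_TAGS)
ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks unlabeled positions

def pretraining_loss(input_ids, attention_mask, mlm_labels, pos_labels,
                     mlm_weight=1.0, pos_weight=0.5):
    """Joint objective: masked language modeling plus a supervised part-of-speech
    term (the dependency-parsing term would be added analogously with a parsing head)."""
    hidden = encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
    loss_mlm = ce(mlm_head(hidden).view(-1, config.vocab_size), mlm_labels.view(-1))
    loss_pos = ce(pos_head(hidden).view(-1, NUM_POS_TAGS), pos_labels.view(-1))
    return mlm_weight * loss_mlm + pos_weight * loss_pos
```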
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
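A simplified sketch of two of the ingredients discussed above: formatting instruction-tuning examples (with or without demonstrations) and temperature-scaled task sampling across a large benchmark mixture. The templates, weights, and function names are illustrative assumptions; the exact OPT-IML recipes differ.

```python
import random

def format_example(instruction, input_text, output_text, demos=None):
    """Render one training example as instruction (+ optional demonstrations)
    followed by the input; the model is trained to generate the output."""
    demo_block = ""
    if demos:
        demo_block = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos) + "\n\n"
    prompt = f"{instruction}\n\n{demo_block}Input: {input_text}\nOutput:"
    return prompt, " " + output_text

def sample_task_batch(task_datasets, batch_size, temperature=2.0):
    """Temperature-scaled proportional sampling over tasks: larger tasks are
    down-weighted so small tasks are not drowned out, one of the benchmark-mixing
    decisions studied in this kind of instruction-tuning setup."""
    sizes = [len(ds) for ds in task_datasets]
    weights = [s ** (1.0 / temperature) for s in sizes]
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = random.choices(range(len(task_datasets)), weights=probs, k=batch_size)
    return [random.choice(task_datasets[i]) for i in chosen]
```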
Large language models have ushered in a golden age of semantic parsing. The seq2seq paradigm allows for open-schema and abstractive attribute and relation extraction given only small amounts of finetuning data. Language model pretraining has simultaneously enabled great strides in natural language inference, reasoning about entailment and implication in free text. These advances motivate us to construct ImPaKT, a dataset for open-schema information extraction, consisting of around 2500 text snippets from the C4 corpus, in the shopping domain (product buying guides), professionally annotated with extracted attributes, types, attribute summaries (attribute schema discovery from idiosyncratic text), many-to-one relations between compound and atomic attributes, and implication relations. We release this data in hope that it will be useful in fine tuning semantic parsers for information extraction and knowledge base construction across a variety of domains. We evaluate the power of this approach by fine-tuning the open source UL2 language model on a subset of the dataset, extracting a set of implication relations from a corpus of product buying guides, and conducting human evaluations of the resulting predictions.
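A hedged sketch of how one might fine-tune a seq2seq model on linearized extractions of this kind; "t5-small" stands in for the much larger UL2 model used in the paper, and the prompt prefix and target format are made up for illustration rather than taken from the ImPaKT annotation schema.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in checkpoint for illustration only.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

snippet = "Look for a tent with a high hydrostatic head rating if you camp in heavy rain."
# One possible linearization of an extracted implication relation; the actual
# annotations are richer (attribute types, summaries, compound attributes, etc.).
target = "implication: high hydrostatic head rating -> suitable for heavy rain"

inputs = tokenizer("extract implications: " + snippet, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq fine-tuning loss
loss.backward()
```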
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand-engineered, but can also be learned with gradient-based methods from labeled data. However, it is underexplored what factors make the prompts effective, especially when the prompts are natural language. In this paper, we investigate common attributes shared by effective prompts. We first propose a human-readable prompt tuning method (FluentPrompt) based on Langevin dynamics that incorporates a fluency constraint to find a diverse distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of label words. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0% accuracy across three tasks.
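A rough sketch of a Langevin-style update on continuous prompt embeddings with a fluency term, followed by projection back to readable tokens; the step sizes, weighting, and function names are assumptions, not the exact FluentPrompt procedure.

```python
import torch

def langevin_prompt_step(prompt_embeds, task_loss_fn, fluency_loss_fn,
                         step_size=1e-3, fluency_weight=0.5, noise_scale=1e-3):
    """One Langevin-dynamics update on continuous prompt embeddings: descend the
    combined task + fluency objective and inject Gaussian noise, which encourages
    exploring a distribution of effective, fluent prompts rather than a single point."""
    prompt_embeds = prompt_embeds.detach().requires_grad_(True)
    loss = task_loss_fn(prompt_embeds) + fluency_weight * fluency_loss_fn(prompt_embeds)
    loss.backward()
    with torch.no_grad():
        noise = torch.randn_like(prompt_embeds) * noise_scale
        updated = prompt_embeds - step_size * prompt_embeds.grad + noise
    return updated

def project_to_vocab(prompt_embeds, embedding_matrix):
    """Map each continuous prompt position to its nearest vocabulary embedding
    so the resulting prompt is human-readable."""
    dists = torch.cdist(prompt_embeds, embedding_matrix)  # (prompt_len, vocab_size)
    token_ids = dists.argmin(dim=-1)
    return token_ids, embedding_matrix[token_ids]
```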
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs). CoT explicitly encourages the LLM to generate intermediate rationales for solving a problem, by providing a series of reasoning steps in the demonstrations. Despite its success, there is still little understanding of what makes CoT prompting effective and which aspects of the demonstrated reasoning steps contribute to its performance. In this paper, we show that CoT reasoning is possible even with invalid demonstrations - prompting with invalid reasoning steps can achieve over 80-90% of the performance obtained using CoT under various metrics, while still generating coherent lines of reasoning during inference. Further experiments show that other aspects of the rationales, such as being relevant to the query and correctly ordering the reasoning steps, are much more important for effective CoT reasoning. Overall, these findings both deepen our understanding of CoT prompting, and open up new questions regarding LLMs' capability to learn to reason in context.
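A toy illustration, with made-up demonstrations, of the kind of "invalid but coherent" prompt variant the paper evaluates: the second demonstration keeps the CoT format and relevant entities but its reasoning steps are deliberately wrong.

```python
valid_demo = (
    "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. How many balls now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n"
)
invalid_demo = (
    "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. How many balls now?\n"
    "A: Roger starts with 3 balls. 5 cans of 2 balls is 7 balls. 3 + 7 = 11. The answer is 11.\n"
)
question = "Q: A baker made 24 muffins and sold 9. How many are left?\nA:"

prompt_with_valid_cot = valid_demo + "\n" + question
prompt_with_invalid_cot = invalid_demo + "\n" + question
# Both prompts would be sent to the LLM; the paper measures how much accuracy
# is retained when the demonstrated reasoning is invalid.
```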
Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available. In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap by constructing pseudo-demonstrations for a given test input using a raw text corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the nearest neighbors to the test input from the corpus and pairing them with random task labels, and (2) applying a set of techniques to reduce the amount of direct copying the model does from the resulting demonstrations. Evaluation on nine classification datasets shows that Z-ICL outperforms previous zero-shot methods by a significant margin, and is on par with in-context learning with labeled training data in the few-shot setting. Overall, Z-ICL provides a significantly higher estimate of the zero-shot performance levels of a model, and supports future efforts to develop better pseudo-demonstrations that further improve zero-shot results.
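A condensed sketch of the pseudo-demonstration construction, assuming precomputed corpus embeddings and an arbitrary embedding function; the additional copying-reduction techniques mentioned above are omitted, and all names here are illustrative.

```python
import random
import numpy as np

def build_pseudo_demonstrations(test_input, corpus, corpus_embeddings, embed_fn,
                                label_set, k=8):
    """Retrieve the k nearest corpus sentences to the test input and pair each
    with a random task label, then prepend them as pseudo-demonstrations."""
    q = embed_fn(test_input)                      # (d,)
    sims = corpus_embeddings @ q / (
        np.linalg.norm(corpus_embeddings, axis=1) * np.linalg.norm(q) + 1e-8)
    nearest = np.argsort(-sims)[:k]
    demos = [(corpus[i], random.choice(label_set)) for i in nearest]
    prompt = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
    return prompt + f"\nInput: {test_input}\nLabel:"
```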
Scaling up language models has led to unprecedented performance gains, but little is understood about how the training dynamics change as models get larger. How do language models of different sizes learn during pre-training? Why do larger language models demonstrate more desirable behaviors? In this paper, we analyze the intermediate training checkpoints of differently sized OPT models (Zhang et al., 2022)--from 125M to 175B parameters--on next-token prediction, sequence-level generation, and downstream tasks. We find that 1) at a given perplexity and independent of model size, a similar subset of training tokens sees the most significant reduction in loss, with the rest stagnating or showing double-descent behavior; 2) early in training, all models learn to reduce the perplexity of grammatical sequences that contain hallucinations, with small models halting at this suboptimal distribution and larger ones eventually learning to assign these sequences lower probabilities; 3) perplexity is a strong predictor of in-context learning performance on 74 multiple-choice tasks from BIG-Bench, and this holds independent of the model size. Together, these results show that perplexity is more predictive of model behaviors than model size or training computation.
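A minimal sketch of the perplexity measurement underlying this kind of analysis, using the smallest public OPT checkpoint purely as a stand-in for the intermediate checkpoints studied in the paper.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Any checkpoint could be analyzed this way; "facebook/opt-125m" is just the
# smallest released OPT model and serves only as an example.
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model.eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Average next-token perplexity of the model on one text."""
    ids = tok(text, return_tensors="pt").input_ids
    # With labels=input_ids, the model shifts the labels internally and returns
    # the mean cross-entropy over next-token predictions.
    loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

print(perplexity("The cat sat on the mat."))
```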